ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA

Neural Information Processing Systems

We consider the identifiability theory of probabilistic models and establish sufficient conditions under which the representations learnt by a very broad family of conditional energy-based models are unique in function space, up to a simple transformation. In our model family, the energy function is the dot-product between two feature extractors, one for the dependent variable and one for the conditioning variable. We show that under mild conditions, the features are unique up to scaling and permutation. Our results extend recent developments in nonlinear ICA, and in fact, they lead to an important generalization of ICA models. In particular, we show that our model can be used for the estimation of the components in the framework of Independently Modulated Component Analysis (IMCA), a new generalization of nonlinear ICA that relaxes the independence assumption. A thorough empirical study shows that representations learnt by our model from real-world image datasets are identifiable, and improve performance in transfer learning and semi-supervised learning tasks.
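The abstract's model family can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): two feature extractors stand in for the deep networks, and the energy is their dot product, with the conditional density taken proportional to the negative exponential of the energy. The function and variable names (`feature_x`, `feature_y`, `energy`, the tanh layers) are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two deep feature extractors in the
# model family: each maps its input to a d-dimensional feature vector.
def feature_x(x, W):
    # feature extractor for the dependent variable x (illustrative one-layer net)
    return np.tanh(W @ x)

def feature_y(y, V):
    # feature extractor for the conditioning variable y
    return np.tanh(V @ y)

def energy(x, y, W, V):
    # ICE-BeeM-style energy: dot product of the two feature vectors.
    # The conditional model is p(x | y) proportional to exp(-energy(x, y)).
    return float(feature_x(x, W) @ feature_y(y, V))

# Toy dimensions and random parameters, purely for illustration.
d, dx, dy = 4, 3, 2
W = rng.standard_normal((d, dx))
V = rng.standard_normal((d, dy))
x = rng.standard_normal(dx)
y = rng.standard_normal(dy)
E = energy(x, y, W, V)
```

The identifiability results concern the feature maps themselves: under the stated conditions, any two parameterizations yielding the same conditional densities have features equal up to scaling and permutation.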


Review for NeurIPS paper: ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA

Neural Information Processing Systems

Weaknesses: This work falls on the periphery of my area of expertise, so I'll try to offer the perspective of someone who is interested in and somewhat familiar with this family of methods, but who does not have intricate knowledge of it. I agree that seeking identifiable representations, as defined in the manuscript, is a worthwhile goal and a promising way to improve downstream processing. The authors came very close to proving full identifiability for the proposed model family, having achieved weak (linear mapping) and strong (permutation) identifiability conditions. However, I'm not sure I follow why strong identifiability represents a substantial step beyond weak identifiability, in the sense that I struggle to think of a downstream task that benefits from strongly identifiable representations but not weakly identifiable ones. As I understand it, the idea of identifiability explored here was introduced in Khemakhem et al.

